Textual data is everywhere: in email and scientific papers, in online newspapers and e-commerce sites. The Web contains more than 200 terabytes of text, not even counting the contents of dynamic textual databases. This enormous source of knowledge is seriously underexploited. Textual documents on the Web are very hard to model computationally: they are mostly unstructured, time-dependent, collectively authored, multilingual, and of uneven importance.
Traditional grammar-based techniques do not scale up to address such problems; novel representations and analytical tools are needed. I will discuss several recent contributions related to text mining across a variety of genres. More specifically, these include (a) lexical models of the growth of the Web, (b) graph-based entity classification,